Session 7-G

SDN III

Thu, Jul 9, 9:00 AM — 10:30 AM EDT

AudiSDN: Automated Detection of Network Policy Inconsistencies in Software-Defined Networks

Seungsoo Lee (KAIST, Korea (South)); Seungwon Woo (ETRI, Korea (South)); Jinwoo Kim (KAIST, Korea (South)); Vinod Yegneswaran and Phillip A Porras (SRI International, USA); Seungwon Shin (KAIST, Korea (South))

At the foundation of every network security architecture lies the premise that formulated network flow policies are reliably deployed and enforced by the network infrastructure. However, software-defined networks (SDNs) pose a particular challenge to satisfying this premise, because the flow policy implementation spans multiple applications and abstraction layers across the SDN stack. In this paper, we focus on the question of how to automatically identify cases in which the SDN stack fails to prevent policy inconsistencies from arising among these components. This question is essential, because when such inconsistencies arise the implications for the security and reliability of the network are severe. We present AudiSDN, an automated fuzz-testing framework designed to formulate test cases in which policy inconsistencies can arise in networks based on OpenFlow, the most prevalent SDN protocol in use today. We also present results from applying AudiSDN to two widely used SDN controllers, Floodlight and ONOS; our test results have led to the filing of 3 separate CVE reports. We believe that the approach presented in this paper is applicable to the breadth of OpenFlow platforms used today, and that its broader use will help address a serious yet understudied practical concern.
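
As a rough, self-contained illustration of the differential-testing idea behind such a fuzzer (not AudiSDN's actual implementation; all names and layer behaviors below are assumptions), the Python sketch generates random flow policies, pushes them through two simulated stack layers whose update semantics differ, and flags any divergence between the intended policy and the resulting switch state.

    import random

    def random_policy():
        # Toy policy: (match prefix, action); real OpenFlow matches are far richer.
        return (f"10.0.{random.randint(0, 3)}.0/24", random.choice(["ALLOW", "DROP"]))

    def app_layer_install(store, match, action):
        # Application-level policy store: last write wins.
        store[match] = action

    def switch_layer_install(table, match, action):
        # Simulated switch quirk: an existing entry is silently kept,
        # standing in for a lost or rejected flow-mod.
        table.setdefault(match, action)

    def fuzz_trial(n_rules=50):
        intended, flow_table = {}, {}
        for _ in range(n_rules):
            match, action = random_policy()
            app_layer_install(intended, match, action)
            switch_layer_install(flow_table, match, action)
        # Any mismatch is a policy inconsistency across stack layers.
        return {m: (a, flow_table[m]) for m, a in intended.items() if flow_table[m] != a}

    if __name__ == "__main__":
        for t in range(3):
            diff = fuzz_trial()
            print(f"trial {t}: {len(diff)} inconsistent matches")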

Inferring Firewall Rules by Cache Side-channel Analysis in Network Function Virtualization

Youngjoo Shin (Kwangwoon University, Korea (South)); Dongyoung Koo (Hansung University, Korea (South)); Junbeom Hur (Korea University, Korea (South))

Network function virtualization takes advantage of virtualization technology to achieve flexibility in network service provisioning. However, it comes at the cost of security risks posed by cache side-channel attacks on virtual machines. In this study, we investigate the security impact of these attacks on virtualized network functions. In particular, we propose a novel cache-based reconnaissance technique against virtualized Linux-based firewalls. The proposed technique has significant advantages from an attacker's perspective. First, it enhances evasiveness against intrusion detection owing to its ability to spoof packet sources. Second, it allows inference of a wide variety of filtering rules. In experiments on VyOS, the proposed method inferred firewall rules with more than 90% accuracy using only a few dozen packets. We also present countermeasures to mitigate cache-based attacks on virtualized network functions.
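
The following toy Python sketch illustrates only the inference step: probe timings are simulated with a stub (standing in for the real cache-side-channel measurements used in the paper), and a port is inferred to be filtered whenever its median probe time exceeds a threshold. All numbers and names are illustrative assumptions.

    import random
    import statistics

    def probe_reload_time(port, filtered_ports):
        # Stand-in for a real cache probe: we simply simulate slower reloads
        # for ports that hit a firewall rule, mimicking the extra cache
        # activity of the rule-lookup path.
        base = random.gauss(80, 5)                       # cycles, cache hit
        return base + (random.gauss(120, 10) if port in filtered_ports else 0)

    def infer_rules(candidate_ports, filtered_ports, samples=30, threshold=150):
        inferred = set()
        for port in candidate_ports:
            times = [probe_reload_time(port, filtered_ports) for _ in range(samples)]
            if statistics.median(times) > threshold:
                inferred.add(port)
        return inferred

    if __name__ == "__main__":
        ground_truth = {22, 80, 443}
        print("inferred filtered ports:", sorted(infer_rules(range(1, 1024), ground_truth)))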

Multicast Traffic Engineering with Segment Trees in Software-Defined Networks

Chih-Hang Wang and Sheng-Hao Chiang (Academia Sinica, Taiwan); Shan-Hsiang Shen (National Taiwan University of Science and Technology, Taiwan); De-Nian Yang (Academia Sinica, Taiwan); Wen-Tsuen Chen (National Tsing Hua University, Taiwan)

Previous research on Segment Routing (SR) has mostly focused on unicast, whereas online SDN multicast with segment trees supporting IETF dynamic group membership has not been explored. Compared with traditional unicast SR, online SDN multicast with segment trees is more challenging, since finding an appropriate size, shape, and location for each segment tree is crucial to deploying it in more multicast trees. In this paper, we explore Multi-tree Multicast Segment Routing (MMSR) to jointly minimize bandwidth consumption and forwarding rule updates over time by leveraging segment trees. We prove that MMSR is NP-hard and design a competitive algorithm, STRUS, that achieves the tightest bound. STRUS includes STR and STP to merge smaller overlapping subtrees into segment trees and then tailor them to serve more multicast trees. We design a Stability Indicator and a Reusage Indicator to carefully construct segment trees at the backbone of multicast trees and to reroute multicast trees so that they span more segment trees. Simulation and implementation on real SDNs with YouTube traffic show that STRUS outperforms state-of-the-art algorithms in terms of total cost and TCAM usage. Moreover, STRUS runs in no more than one second on massive networks with thousands of nodes, and is therefore practical for SDN deployment.

SDN-based Order-aware Live Migration of Virtual Machines

Dinuni Fernando, Ping Yang and Hui Lu (Binghamton University, USA)

Live migration is a key technique for transferring virtual machines (VMs) from one machine to another. Often multiple VMs need to be migrated in response to events such as server maintenance, load balancing, and impending failures. However, VM migration is a resource-intensive operation that strains the CPU, memory, and network resources of the source and destination hosts as well as the intermediate network links. The live migration mechanism ends up contending for finite resources with the VMs it needs to migrate, which prolongs the total migration time and degrades the performance of applications running inside the VMs. In this paper, we propose SOLive, a new approach to reducing resource contention between the migration process and the VMs being migrated. First, by considering the nature of VM workloads, SOLive manages the migration order to significantly reduce the total migration time. Second, to reduce network contention between the migration process and the VMs, SOLive uses a combination of software-defined-networking-based resource reservation and scatter-gather-based VM migration to quickly deprovision the source host. A prototype implementation of our approach on the KVM/QEMU platform shows that SOLive quickly evicts VMs from the source host with little impact on VM performance.
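
A minimal sketch of workload-aware ordering, assuming a simplified pre-copy model in which migration time is roughly memory size divided by (bandwidth minus dirty rate); this is not SOLive's algorithm, only an illustration of why migration order matters when draining a source host.

    # Estimated pre-copy migration time ~ mem / (bw - dirty_rate) when the
    # dirty rate is below the migration bandwidth (geometric series over
    # pre-copy rounds). Migrating fast-to-evict VMs first shortens the
    # average time each VM keeps loading the source host.

    def est_migration_time(mem_gb, dirty_rate_gbps, bw_gbps):
        if dirty_rate_gbps >= bw_gbps:
            return float("inf")     # pre-copy never converges; needs throttling
        return mem_gb / (bw_gbps - dirty_rate_gbps)

    def migration_order(vms, bw_gbps=10.0):
        # vms: list of dicts with 'name', 'mem_gb', 'dirty_rate_gbps'
        return sorted(vms, key=lambda v: est_migration_time(
            v["mem_gb"], v["dirty_rate_gbps"], bw_gbps))

    if __name__ == "__main__":
        vms = [
            {"name": "db",    "mem_gb": 64, "dirty_rate_gbps": 4.0},
            {"name": "web",   "mem_gb": 8,  "dirty_rate_gbps": 0.5},
            {"name": "batch", "mem_gb": 32, "dirty_rate_gbps": 1.0},
        ]
        print([v["name"] for v in migration_order(vms)])   # web, batch, db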

Session Chair

Jin Zhao (Fudan University)

Session 8-G

Edge Computing II

Thu, Jul 9, 11:00 AM — 12:30 PM EDT

Collaborate or Separate? Distributed Service Caching in Mobile Edge Clouds

Zichuan Xu and Lizhen Zhou (Dalian University of Technology, China); Sid Chi-Kin Chau (Australian National University, Australia); Weifa Liang (The Australian National University, Australia); Qiufen Xia (Dalian University of Technology, China); Pan Zhou (Huazhong University of Science and Technology, China)

With the development of 5G technology, mobile edge computing is emerging as an enabling technique for improving the Quality of Service (QoS) of network services, and service providers are caching their services in the edge cloud. In this paper, we study the problem of service caching in a mobile edge network, under a mobile service market in which multiple network service providers compete for both the computation and bandwidth resources of the edge cloud. For the problem without resource sharing among network service providers, we formulate an Integer Linear Program (ILP) and propose an approximation algorithm based on randomized rounding. For the problem with resource sharing, we devise a distributed and stable game-theoretical mechanism that aims to minimize the social cost of all service providers, by introducing a novel cost-sharing model and a coalition formation game. We analyze the performance of the mechanism by showing a guaranteed gap between the obtained solution and the social optimum, i.e., the Strong Price of Anarchy (SPoA).
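
For the non-sharing case, the abstract mentions randomized rounding; the sketch below shows the generic rounding step on a hypothetical fractional LP solution (resource constraints and the paper's approximation guarantee are omitted).

    import random

    # x[s][e] is the LP-relaxation fraction of service s cached at edge cloud e
    # (each row sums to 1). Each service samples one placement with probability
    # equal to its fractional value.

    def round_placement(x):
        placement = {}
        for service, fracs in x.items():
            edges, weights = zip(*fracs.items())
            placement[service] = random.choices(edges, weights=weights, k=1)[0]
        return placement

    if __name__ == "__main__":
        x = {
            "svc-A": {"edge-1": 0.7, "edge-2": 0.3},
            "svc-B": {"edge-1": 0.2, "edge-2": 0.5, "cloud": 0.3},
        }
        print(round_placement(x))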

Cooperative Service Caching and Workload Scheduling in Mobile Edge Computing

Xiao Ma (Beijing University of Posts and Telecommunications, China); Ao Zhou (Beijing University of Posts & Telecommunications, China); Shan Zhang (Beihang University, China); Shangguang Wang (Beijing University of Posts and Telecommunications, China)

Mobile edge computing reduces service response time and core network traffic by pushing cloud functionalities to the network edge. Equipped with storage and computation capacities, edge nodes can cache services of resource-intensive and delay-sensitive mobile applications and process the corresponding computation tasks without outsourcing them to central clouds. However, the heterogeneity of edge resource capacities and the mismatch between edge storage and computation capacities make it difficult to fully utilize both when edge nodes do not cooperate. To address this issue, we consider cooperation among edge nodes and investigate cooperative service caching and workload scheduling in mobile edge computing. We formulate this problem as a mixed-integer nonlinear program, which has non-polynomial computational complexity. To overcome the challenges of subproblem coupling, the computation-communication tradeoff, and edge node heterogeneity, we develop an iterative algorithm called ICE. The algorithm combines Gibbs sampling, which yields provably near-optimal results, with the idea of water filling, which has polynomial computational complexity. Simulation results demonstrate that our algorithm jointly reduces the service response time and the outsourcing traffic compared with benchmark algorithms.
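
A skeleton of a Gibbs-sampling search over caching decisions, with a toy cost function standing in for the paper's objective; it only illustrates the sampling machinery, not ICE itself.

    import math
    import random

    SERVICES = ["s1", "s2", "s3"]

    def system_cost(config, demand):
        # Toy cost: one unit for every demanded service not cached anywhere.
        cached = set(config.values())
        return sum(1 for _, svc in demand if svc not in cached)

    def gibbs_caching(nodes, demand, iters=200, temperature=0.1):
        config = {n: random.choice(SERVICES) for n in nodes}
        for _ in range(iters):
            node = random.choice(nodes)
            weights = []
            for svc in SERVICES:
                trial = dict(config)
                trial[node] = svc
                # Boltzmann weight: lower system cost gets higher probability.
                weights.append(math.exp(-system_cost(trial, demand) / temperature))
            config[node] = random.choices(SERVICES, weights=weights)[0]
        return config

    if __name__ == "__main__":
        demand = [("n1", "s1"), ("n2", "s2"), ("n3", "s3"), ("n1", "s3")]
        print(gibbs_caching(["n1", "n2", "n3"], demand))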

Joint Optimization of Signal Design and Resource Allocation in Wireless D2D Edge Computing

Junghoon Kim (Purdue University, USA); Taejoon Kim and Morteza Hashemi (University of Kansas, USA); Christopher G. Brinton (Purdue University & Zoomi Inc., USA); David Love (Purdue University, USA)

In this paper, we study the distributed computational capabilities of device-to-device (D2D) networks. A key characteristic of D2D networks is that their topologies are reconfigurable to cope with network demands. For distributed computing, resource management is challenging due to limited network and communication resources, which leads to inter-channel interference. To overcome this, recent research has addressed the problems of network scheduling, subchannel allocation, and power allocation, but has not considered them jointly. In this paper, unlike previous mobile edge computing (MEC) approaches, we propose a joint optimization of wireless signal design and network resource allocation to maximize energy efficiency. Given that the resulting problem is a non-convex mixed integer program (MIP) that is infeasible to solve at scale, we decompose its solution into two parts: (i) a resource allocation subproblem, which optimizes link selection and subchannel allocation, and (ii) a signal design subproblem, which optimizes the transmit beamformer, transmit power, and receive combiner. Simulation results on wireless edge topologies show that our method yields substantial improvements in energy efficiency compared with no offloading and with partially optimized methods, and that the efficiency scales well with the size of the network.
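
A compact skeleton of the alternating (block-coordinate) structure suggested by this decomposition, using toy channel parameters and a grid search in place of the actual subproblem solvers; everything below is an illustrative assumption rather than the paper's method.

    import math

    BW = 1e6          # Hz
    NOISE = 1e-9      # W
    GAINS = {"link-A": 1e-6, "link-B": 4e-6}   # assumed channel gains
    POWERS = [0.05 * i for i in range(1, 41)]  # candidate transmit powers, W

    def energy_efficiency(gain, power):
        rate = BW * math.log2(1 + power * gain / NOISE)   # Shannon rate
        return rate / power                                # bits per Joule

    def alternate(iters=10):
        power = 1.0
        for _ in range(iters):
            # (i) "resource allocation": pick the best link for the current power.
            link = max(GAINS, key=lambda l: energy_efficiency(GAINS[l], power))
            # (ii) "signal design": grid-search power for the chosen link.
            power = max(POWERS, key=lambda p: energy_efficiency(GAINS[link], p))
        return link, power, energy_efficiency(GAINS[link], power)

    if __name__ == "__main__":
        print(alternate())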

INFOCOM 2020 Best Paper: Reducing the Service Function Chain Backup Cost over the Edge and Cloud by a Self-adapting Scheme

Xiaojun Shang, Yaodong Huang, Zhenhua Liu and Yuanyuan Yang (Stony Brook University, USA)

The fast development of virtual network functions (VNFs) brings new opportunities for network service deployment on edge networks. For complicated services, VNFs can be chained to form service function chains (SFCs). Despite the promise, it is still not clear how to back up VNFs so as to minimize cost while meeting SFC availability requirements in an online manner. In this paper, we propose a novel self-adapting scheme named SAB to efficiently back up VNFs over both the edge and the cloud. Specifically, SAB uses both static backups and dynamic ones created on the fly to accommodate the resource limitations of edge networks. For each VNF backup, SAB determines whether to place it on the edge or in the cloud, and if on the edge, which edge server to use for load balancing. SAB does not assume knowledge of VNF failure rates but instead strives to find the sweet spot between the desired availability of SFCs and the backup cost. Both theoretical performance bounds and extensive simulation results show that SAB provides significantly higher availability with lower backup costs than existing baselines.

Session Chair

Jiangchuan Liu (Simon Fraser University)

Session 9-G

SDN IV

Thu, Jul 9, 2:00 PM — 3:30 PM EDT

HiFi: Hybrid Rule Placement for Fine-Grained Flow Management in SDNs

Gongming Zhao and Hongli Xu (University of Science and Technology of China, China); Jingyuan Fan (State University of New York at Buffalo, USA); Liusheng Huang (University of Science and Technology of China, China); Chunming Qiao (University at Buffalo, USA)

Fine-grained flow management is very useful in many practical applications, e.g., resource allocation, anomaly detection, and traffic engineering. However, it is difficult to provide fine-grained management for a large number of flows in SDNs due to switches' limited flow table capacity. While using wildcard rules can reduce the number of flow entries needed, wildcards alone cannot ensure fine-grained management for all flows without degrading application performance. In this paper, we design and implement HiFi, a system that achieves fine-grained management with a minimal number of flow entries. HiFi takes a two-step approach: wildcard entry installation followed by application-specific exact-match entry installation. Optimally installing wildcard and exact-match flow entries, however, is intractable, so we design approximation algorithms with bounded approximation factors for these problems. As a case study, we consider how to achieve network-wide load balancing via fine-grained flow management. Both experimental and simulation results show that HiFi reduces the number of required flow entries by about 45%-69% and the control overhead by 28%-50% compared with state-of-the-art approaches to fine-grained flow management.
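
A toy illustration of the two-step idea, assuming /24 wildcard aggregation plus exact-match entries for flows above a hypothetical byte threshold; HiFi's real algorithms choose these entries with provable bounds.

    # Step 1: cover all traffic with coarse wildcard entries.
    # Step 2: add exact-match entries only for flows the application must
    # manage individually (here, "elephant" flows above a byte threshold).

    def install_rules(flows, elephant_bytes=10**7):
        # flows: list of (src_ip, dst_ip, byte_count)
        wildcard, exact = set(), []
        for src, dst, nbytes in flows:
            dst_prefix = ".".join(dst.split(".")[:3]) + ".0/24"
            wildcard.add((dst_prefix, "FORWARD"))
            if nbytes >= elephant_bytes:
                exact.append(((src, dst), "FINE_GRAINED"))
        return sorted(wildcard), exact

    if __name__ == "__main__":
        flows = [
            ("10.0.1.5", "192.168.2.9", 5 * 10**8),
            ("10.0.1.6", "192.168.2.10", 2 * 10**4),
            ("10.0.1.7", "192.168.3.4", 3 * 10**5),
        ]
        wc, ex = install_rules(flows)
        print(len(wc), "wildcard entries,", len(ex), "exact-match entries")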

Homa: An Efficient Topology and Route Management Approach in SD-WAN Overlays

Diman Zad Tootaghaj and Faraz Ahmed (Hewlett Packard Labs, USA); Puneet Sharma (Hewlett Packard Labs & HP Labs, USA); Mihalis Yannakakis (Columbia University, USA)

This paper presents an efficient topology and route management approach for Software-Defined Wide Area Networks (SD-WAN). Traditional WANs suffer from low utilization and lack a global view of the network; consequently, during failures, topology/service/traffic changes, or the introduction of new policy requirements, the system does not always converge to the globally optimal state. Using Software-Defined Networking architectures in WANs provides the opportunity to design WANs with higher fault tolerance, scalability, and manageability. We exploit the correlation matrix between virtual links, derived from the monitoring system, to infer the underlying route topology, and we propose a route update approach that minimizes the total route update cost over all flows. We formulate the problem as an integer linear program and provide a centralized control approach that minimizes the total cost while satisfying the quality-of-service (QoS) requirements of all flows. Experimental results on real network topologies demonstrate the effectiveness of the proposed approach in terms of disruption cost and the average number of disrupted flows.
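
A small illustration of correlation-based inference: overlay links whose latency series co-fluctuate are flagged as likely sharing underlying links. The data, the threshold, and the use of latency as the monitored metric are assumptions made only for this sketch.

    import numpy as np

    def shared_link_pairs(latency, threshold=0.8):
        # latency: dict virtual_link -> 1-D array of latency samples
        names = list(latency)
        series = np.vstack([latency[n] for n in names])
        corr = np.corrcoef(series)          # rows are variables (virtual links)
        return [(names[i], names[j], round(float(corr[i, j]), 2))
                for i in range(len(names)) for j in range(i + 1, len(names))
                if corr[i, j] >= threshold]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        congestion = rng.normal(0, 1, 200)              # shared bottleneck signal
        latency = {
            "vA-vB": 10 + 2 * congestion + rng.normal(0, 0.2, 200),
            "vA-vC": 12 + 2 * congestion + rng.normal(0, 0.2, 200),
            "vB-vD": 9 + rng.normal(0, 0.5, 200),       # independent path
        }
        print(shared_link_pairs(latency))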

Incremental Server Deployment for Scalable NFV-enabled Networks

Jianchun Liu, Hongli Xu and Gongming Zhao (University of Science and Technology of China, China); Chen Qian (University of California at Santa Cruz, USA); Xingpeng Fan and Liusheng Huang (University of Science and Technology of China, China)

To construct a new NFV-enabled network, there are two critical requirements: minimizing server deployment cost and satisfying switch resource constraints. However, prior work mostly focuses on the server deployment cost while ignoring switch resource constraints (e.g., the switch's flow-table size), which results in a large number of rules on switches and massive control overhead. To address this challenge, we propose the incremental server deployment (INSD) problem for the construction of scalable NFV-enabled networks. We prove that the INSD problem is NP-hard and that there is no polynomial-time algorithm with an approximation ratio of (1 − ε)·ln m, where ε is an arbitrarily small value and m is the number of requests in the network. We then present an efficient algorithm with an approximation ratio of 2·H(q·p), where q is the number of VNF categories and p is the maximum number of requests through a switch. We evaluate the performance of our algorithm with experiments on a physical platform (Pica8), on Open vSwitches (OVSes), and in large-scale simulations. Both the experiment and simulation results show the high scalability of the proposed algorithm. For example, our solution reduces the control and rule overhead by about 88% with about 5% additional server deployment, compared with existing solutions.
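
The H(·)-style ratio suggests a set-cover-like greedy; the sketch below shows such a greedy on toy data, repeatedly opening the site with the lowest cost per newly served request. This is only the generic greedy pattern; the actual INSD algorithm additionally respects switch flow-table constraints.

    def greedy_deploy(candidates, requests):
        # candidates: dict site -> (deploy_cost, set of request ids it can serve)
        unserved, opened = set(requests), []
        while unserved:
            site, (cost, cover) = min(
                ((s, c) for s, c in candidates.items() if c[1] & unserved),
                key=lambda item: item[1][0] / len(item[1][1] & unserved))
            opened.append(site)
            unserved -= cover
        return opened

    if __name__ == "__main__":
        candidates = {
            "s1": (10, {1, 2, 3}),
            "s2": (6,  {3, 4}),
            "s3": (9,  {4, 5, 6}),
        }
        print(greedy_deploy(candidates, {1, 2, 3, 4, 5, 6}))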

Network Slicing in Heterogeneous Software-defined RANs

Qiaofeng Qin (Yale University, USA); Nakjung Choi (Nokia & Bell Labs, USA); Muntasir Raihan Rahman (Microsoft, USA); Marina Thottan (Bell Labs, USA); Leandros Tassiulas (Yale University, USA)

5G technologies promise to revolutionize mobile networks and push them to the limits of resource utilization. Beyond added capacity, we also need better resource management via virtualization. End-to-end network slicing involves not only the core but also the Radio Access Network (RAN), which makes this a challenging problem: multiple alternative radio access technologies exist (e.g., LTE, WLAN, and WiMAX), and there is no unifying abstraction for comparing and composing diverse technologies. In addition, existing work assumes that all RAN infrastructure is under a single administrative domain. Software-Defined Radio Access Networks (SD-RAN) offer programmability that facilitates a unified abstraction for resource sharing and composition across multiple providers harnessing different technology stacks. In this paper, we propose a new architecture for heterogeneous RAN slicing across multiple providers. A central component of our architecture is a service orchestrator that interacts with multiple network providers and service providers to negotiate resource allocations that are jointly optimal. We propose a double auction mechanism that captures the interaction among selfish parties and guarantees convergence to the optimal social welfare in finite time. We then demonstrate the feasibility of our proposed system using open-source SD-RAN platforms such as EmPOWER (WiFi) and FlexRAN (LTE).
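
For intuition only, the sketch below implements a simplified trade-reduction double auction in the style of McAfee's classic mechanism; the paper's mechanism and its welfare guarantees differ in detail.

    def double_auction(bids, asks):
        # Sort buyer bids descending and seller asks ascending, find the
        # breakeven index k, and trade the first k-1 pairs at a single price
        # taken from the excluded k-th pair, so no trading participant's own
        # bid sets the price.
        bids, asks = sorted(bids, reverse=True), sorted(asks)
        k = 0
        while k < min(len(bids), len(asks)) and bids[k] >= asks[k]:
            k += 1
        if k < 2:
            return [], None                  # not enough profitable trades
        price = (bids[k - 1] + asks[k - 1]) / 2
        return list(range(k - 1)), price

    if __name__ == "__main__":
        trades, price = double_auction(bids=[9, 8, 6, 3], asks=[2, 4, 5, 7])
        print("matched pairs:", trades, "clearing price:", price)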

Session Chair

Tamer Nadeem (Virginia Commonwealth University)

Session 10-G

Edge Computing III

Thu, Jul 9, 4:00 PM — 5:30 PM EDT

A Fast Hybrid Data Sharing Framework for Hierarchical Mobile Edge Computing

Junjie Xie and Deke Guo (National University of Defense Technology, China); Xiaofeng Shi and Haofan Cai (University of California Santa Cruz, USA); Chen Qian (University of California at Santa Cruz, USA); Honghui Chen (National University of Defense Technology, China)

Edge computing satisfies the stringent latency requirements of data access and processing for applications running on edge devices. A data location service is a key function for providing the data storage and retrieval that these applications rely on. However, scalable and low-latency data location services for mobile edge computing remain largely unexplored, and existing solutions such as DNS and DHTs fail to meet the low-latency requirements of mobile edge computing. This paper presents a low-latency hybrid data sharing framework, HDS, in which the data location service is divided into two parts: intra-region and inter-region. For the intra-region part, we design a data sharing protocol called Cuckoo Summary to achieve fast data localization. For the inter-region part, we design a geographic-routing-based scheme that achieves efficient data localization with only one overlay hop. The advantages of HDS include short response latency, low implementation overhead, and few false positives. We implement HDS as a P4-based prototype. The experimental results show that, compared to state-of-the-art solutions, our design achieves 50.21% shorter lookup paths and 92.75% fewer false positives.
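
A minimal cuckoo-hashing-style directory mapping data keys to the edge server that holds them, with two candidate buckets per key and bounded eviction. This only sketches the flavor of the idea; the paper's Cuckoo Summary stores compact fingerprints and is engineered for the P4 data plane.

    import random

    class CuckooDirectory:
        def __init__(self, size=64, max_kicks=32):
            self.size, self.max_kicks = size, max_kicks
            self.buckets = [None] * size          # each slot holds (key, server)

        def _slots(self, key):
            h1 = hash(key) % self.size
            h2 = hash(("alt", key)) % self.size   # second, roughly independent hash
            return h1, h2

        def insert(self, key, server):
            entry = (key, server)
            for _ in range(self.max_kicks):
                for slot in self._slots(entry[0]):
                    if self.buckets[slot] is None:
                        self.buckets[slot] = entry
                        return True
                # Both candidates full: evict one at random and re-insert it.
                slot = random.choice(self._slots(entry[0]))
                self.buckets[slot], entry = entry, self.buckets[slot]
            return False                           # table too full; would resize

        def lookup(self, key):
            for slot in self._slots(key):
                if self.buckets[slot] and self.buckets[slot][0] == key:
                    return self.buckets[slot][1]
            return None

    if __name__ == "__main__":
        d = CuckooDirectory()
        d.insert("object-42", "edge-server-3")
        print(d.lookup("object-42"))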

Data-driven Distributionally Robust Optimization for Edge Intelligence

Zhaofeng Zhang, Sen Lin, Mehmet Dedeoglu, Kemi Ding and Junshan Zhang (Arizona State University, USA)

The past few years have witnessed the explosive growth of Internet of Things (IoT) devices. The need for real-time edge intelligence in IoT applications means that decision making must take place right at the network edge, which dictates that a high percentage of IoT-created data be stored and analyzed locally. However, computing resources at edge nodes are constrained and the amount of local data is often very limited. To tackle these challenges, we propose a distributionally robust optimization (DRO)-based edge intelligence framework built on an innovative synergy of cloud knowledge transfer and local learning. More specifically, the knowledge transferred from cloud learning takes the form of a reference distribution and its associated uncertainty set. Further, based on its local data, the edge device constructs an uncertainty set centered around its empirical distribution. The edge learning problem is then cast as a DRO problem subject to these two distributional uncertainty sets. Building on this framework, we investigate two problem formulations for DRO-based edge intelligence, in which the uncertainty sets are constructed using the Kullback-Leibler divergence and the Wasserstein distance, respectively. Numerical results demonstrate the effectiveness of the proposed DRO-based framework.
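
One plausible way to write the edge learning problem described above (our notation, not necessarily the paper's) is

    \min_{\theta} \; \sup_{Q \,\in\, \mathcal{U}_0 \cap \mathcal{U}_n} \; \mathbb{E}_{x \sim Q}\!\left[ \ell(\theta; x) \right],
    \qquad
    \mathcal{U}_0 = \{ Q : D(Q, P_0) \le \rho_0 \}, \quad
    \mathcal{U}_n = \{ Q : D(Q, \hat{P}_n) \le \rho_n \},

where P_0 is the reference distribution transferred from the cloud, \hat{P}_n is the empirical distribution over the edge device's n local samples, \ell is the learning loss, and the discrepancy D is instantiated as either the Kullback-Leibler divergence or the Wasserstein distance, giving the two formulations studied in the paper.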

Delay-Optimal Distributed Edge Computing in Wireless Edge Networks

Xiaowen Gong (Auburn University, USA)

Distributed edge computing (DEC) makes use of distributed devices in the edge network to perform computing in parallel. By integrating distributed computing with edge computing, DEC can reduce service delays compared to either paradigm alone. In this paper, we explore DEC that exploits edge devices connected by a wireless network to perform distributed computing. In particular, we study the fundamental problem of minimizing the delay of executing a distributed algorithm. We first establish structural properties of the optimal communication schedule, which show that it is optimal to be non-preemptive, to be non-idling, and to schedule forward communications before backward communications. Then, based on these properties, we characterize the optimal computation allocation, which can be found by an efficient algorithm. Next, based on the optimal computation allocation, we characterize the optimal scheduling order of communications for some cases and develop an efficient algorithm with a finite approximation ratio to find it in the general case. Finally, based on the optimal computation allocation and communication scheduling, we show that the optimal selection of devices can be found efficiently for some cases. Our results provide useful insights into the optimal policies. We evaluate the theoretical findings using simulations.

Fog Integration with Optical Access Networks from an Energy Efficiency Perspective

Ahmed Helmy and Amiya Nayak (University of Ottawa, Canada)

Access networks have been undergoing continuous reform to better support demanding applications and adapt to emerging traffic trends. On one hand, incorporating fog and edge computing has become a necessity for alleviating network congestion and supporting numerous applications that can no longer rely on the resources of a remote cloud. On the other hand, energy efficiency has become a strong imperative for telecommunication networks: it reduces both operational costs and carbon footprint, but often leads to some degradation in network performance. In this paper, we address these two challenges by examining the integration of fog computing with optical access networks under power-conserving frameworks. As most power-conserving frameworks in the literature are centralized, we also propose a novel decentralized energy-efficient framework and compare its performance against its centralized counterpart in a long-reach passive optical network (LR-PON). We study possible cloudlet placements and the offloading performance of each allocation paradigm to determine which can meet the requirements of next-generation access networks with better network performance and lower energy consumption.

Session Chair

Marie-Jose Montpetit (MJMontpetit.com)
